Engaging with Dr. Timnit Gebru
Kate Kenny
CS 0451
Part 1
This week, our machine learning class has the opportunity to hear from Dr. Timnit Gebru, a leader in the field of algorithmic bias. Dr. Gebru is the founder of the Distributed Artificial Intelligence Research Institute (DAIR) and the co-founder of Black in AI. Her work gained national attention after she left Google, where she co-led the Ethical Artificial Intelligence Team, following the company's dispute over a paper she co-authored on the dangers of large language models. She is the recipient of numerous awards and accolades for her work on bias both within technologies and within tech companies. I am very excited and thankful for the opportunity to speak with her in our class setting and to hear her speak to a broader audience at the college on Monday evening.
Dr. Gebru’s Talk: Fairness, Accountability, Transparency, and Ethics in Computer Vision
Dr. Gebru states that the motivation for this talk is the enormous harm that has been done to Black Americans, both through broad institutional racism and through specific technologies created without Black voices in the room. She states that this is a difficult topic for her, as a Black woman, to discuss in Computer Vision spaces, since they are overwhelmingly white and male. She begins with the point that many people developing computer vision technology have hopes and concerns about the uses of these technologies but do not actually know what these technologies are doing in practice.
Gebru states that the same technology will have different pros and cons for people coming from different backgrounds. Surveillance is one example of this: people who are used to living in heavily surveilled areas might immediately see threats in computer vision that would not occur to people unfamiliar with such surveillance. Dr. Gebru gives many examples of technologies and startups that, using photos or videos, filter or classify people in ways that perpetuate systemic racism and other inequalities. She gives examples in policing systems, automated hiring/interview technologies, and crime predictors based on facial features.
A major theme throughout the talk is that the work done by computer scientists is not abstract, but is about, and has an impact on, real people. Additionally, there can be algorithms or technologies that work exactly as intended but still perpetuate systemic inequalities and have the largest impacts on already marginalized groups. Gebru asserts that people are often willing to recognize that datasets need to be more diverse but ignore the systemic issues behind these systems. People are eager to make things more diverse without thinking critically about whether things like gender recognition or other facial recognition software are necessary or ethical.
tldr: Making technology fair goes beyond creating technologies that work equally well on everyone. Every technology and data set involves real people on both sides and can perpetuate systemic biases or violate civil liberties regardless of how well it performs on quantitative measures, since technology can only amplify human intent and existing issues.
Question
How do you think the public panic around ChatGPT and other similar recent AI systems has obscured the real dangers you discussed in the talk? Is this cultural moment an opportunity to engage the public on these issues, since there is a growing discussion around the dangers of AI, or are broader issues being ignored in this discussion?
Part 2: Dr. Gebru’s Talk at Middlebury
Dr. Gebru’s talk this past Monday focused on how Artificial General Intelligence (AGI) relates to and supports the second-wave eugenics movement. To start, Dr. Gebru stated the mission of her research organization as both mitigating current harm in the field of AI and imagining and executing a different technological future. She stated some of the immediate dangers of AI, including worker exploitation, data theft, and unfair compensation systems, before introducing the promises made by billionaires in the industry.
These promises introduced the idea of a tech Utopia that was a common theme throughout the lecture. The wealthy people behind some of these large models and tech companies promise that in the near future there will be infinite wealth, no need to work, and all of the world’s problems will be solved. These ideas brought the conversation to the topic of eugenics, and Dr. Gebru summarized some of the history of the first- and second-wave eugenics movements. She connected what she called the TESCREAL Bundle (some of the most common schools of thought in second-wave eugenics) with the promises of AI and the binary visions of the future as an AI Utopia or AI Apocalypse.
Many of the beliefs in the TESCREAL Bundle centered around positive eugenics ideals: becoming better humans, or transcending humanity altogether, through technology. It is promised that AGI, vaguely defined as an AI system with intellectual abilities greater than a human’s, will completely overhaul our species, our world, and even the universe. The goal is for a singular model to be everything to everyone: for one model to be able to answer any question or, perhaps more importantly, for everyone in the world to pay for one model to answer every question. In essence, an AI model built to play God.
Dr. Gebru concluded her lecture with her own vision of the future of AI. Instead of developing increasingly large models with increasingly large data sets, Dr. Gebru believes that the field should focus on smaller, specific models trained on curated data sets that are sourced ethically.
Reflecting on the talk as I am concluding my own undergraduate experience at Middlebury studying computer science is very interesting. I can think of many interactions I have had with other students, within and outside of the Computer Science department, about Artificial Intelligence, and so much of the panic surrounding things like ChatGPT, or the promises of never having to do homework again, echoes the sentiments expressed by Dr. Gebru. I agree with her about the false dichotomy of AI Utopia and Apocalypse and never quite understood how to express those ideas until she articulated them so clearly. I also think she emphasized an important point about the workers involved in the creation of these large models. People often just imagine some coder making these systems and ignore the labor, often exported to the global south, that is essential to their creation. The idea that these huge models could erase labor is repeated again and again, and I believe one of the biggest takeaways from Dr. Gebru’s talk is that labor is never erased, only shifted to people who usually have less agency and are compensated less fairly.
Part 3
Overall, it was a great privilege to be able to listen to and interact with Dr. Gebru. For one, I have heard about Dr. Gebru’s work in multiple classes at Middlebury, so it was simply exciting to hear her speak. Putting a face to some of the issues that we have been discussing throughout Machine Learning was also very powerful, as I think it is sometimes easy to come up with examples of people who work at Google, Facebook, etc., but harder to name people working in other, more social-good-oriented areas of computer science. It was empowering to hear her discuss imposter syndrome as something external, namely racism and sexism, rather than as an internal defect of some sort, since, as a woman in computer science, I have been frustrated by campaigns or messaging telling women to just be more confident in order to succeed. Additionally, it was empowering for me to see some of the areas that need people and what important issues are out there, since I am not particularly interested in working as a software engineer or something similar, and it can be difficult to have a vision for a future in technology without many role models. It is frustrating to hear about some of the challenges that Dr. Gebru still faces in the industry and her own exhaustion with the Computer Vision field, but hopefully, as such a trailblazer, she has inspired many other people to follow in her footsteps and approach these problems with new vigor. One additional thing I have difficulty with is how to communicate with people who do see AI as a God-like force without sounding like a Luddite, as sometimes I feel that men in computer science dismiss concerns or critiques of the industry as anti-progress or uninformed.